Doctoral Thesis Information

Explainable Artificial Intelligence (XAI) techniques based on partial derivatives with applications to neural networks

Jaime Pizarroso Gonzalo

Supervised by A. Muñoz and J. Portela

15 December 2023

Abstract:

As Machine Learning (ML) and Deep Learning (DL) models continue to permeate various aspects of society, there is an increasing demand for interpretability and transparency in their decision-making processes. This demand is fueled by the need to understand, trust, and effectively use these complex, black-box models, particularly in high-stakes applications where decisions can have far-reaching consequences. Furthermore, the advancement of interpretability techniques is critical for complying with emerging ethical and legal requirements concerning the use of Artificial Intelligence (AI) systems.


Explainable Artificial Intelligence (XAI) has emerged as a promising solution to the opacity of complex models, offering techniques to make their decision-making processes understandable and transparent. Nevertheless, most existing XAI techniques face limitations concerning assumptions about data relationships, computational cost, the trade-off between interpretability and accuracy, and their ability to provide both local and global explanations. To address these issues, this thesis introduces novel XAI methods based on partial derivatives. Unlike existing methods, these techniques provide detailed explanations, from the local to the global level, without making assumptions about the relationships between inputs and outputs.
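
To ground the idea, one standard way to formalize derivative-based sensitivity (the notation below is an illustrative sketch, not necessarily the exact definitions used in the thesis) is to evaluate the partial derivative of the model output with respect to each input at every sample and summarize it over the dataset:

```latex
% Illustrative sketch (symbols are assumptions, not the thesis's notation):
% local sensitivity of a differentiable model f with respect to input x_k,
% evaluated at sample x_n, plus its mean and standard deviation over N samples.
s_k(\mathbf{x}_n) = \left.\frac{\partial f(\mathbf{x})}{\partial x_k}\right|_{\mathbf{x}=\mathbf{x}_n},
\qquad
\bar{s}_k = \frac{1}{N}\sum_{n=1}^{N} s_k(\mathbf{x}_n),
\qquad
\sigma_{s_k} = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\bigl(s_k(\mathbf{x}_n)-\bar{s}_k\bigr)^2}
```

Under this reading, a near-zero mean paired with a large standard deviation would flag an input whose influence cancels out on average but is locally strong.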


The main contributions of this thesis reside in three newly developed methods: Sensitivity Analysis, α-curves, and the application to ML models of the Interaction Invariant designed in Alfaya et al. (2023), all of which leverage partial derivatives to offer interpretability of differentiable ML models. Sensitivity Analysis estimates the influence of each input variable on the ML model output, offering insight into the most impactful variables. α-curves provide a detailed view of how sensitivity varies across the input space, helping to identify both average and localized high-sensitivity regions. Lastly, the Interaction Invariant focuses on detecting interactions between input variables, revealing complex relationships within the data that may influence the model's decision-making process. Collectively, these methods offer a comprehensive understanding of ML models, enhancing the transparency and trustworthiness of AI systems.
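
As a sketch of how such derivative-based measures might be computed in practice, the following uses PyTorch automatic differentiation; this is a hypothetical illustration, not the implementation developed in the thesis, and the model, data, and the helper `input_sensitivities` are placeholder assumptions. The final loop only gestures at the intuition behind α-curves: as α grows, an α-norm mean of the absolute sensitivities shifts from the average toward the maximum, exposing localized high-sensitivity regions.

```python
# Minimal sketch (not the thesis's implementation): estimating input
# sensitivities of a trained differentiable model with PyTorch autograd.
# The model, data, and names below are hypothetical placeholders.
import torch

def input_sensitivities(model, X):
    """Return partial derivatives d(output)/d(input) for each sample
    in X, as a tensor of shape (n_samples, n_inputs)."""
    X = X.clone().requires_grad_(True)
    # Summing the outputs lets one backward pass recover each sample's
    # own gradient, since sample n's output depends only on row n of X.
    model(X).sum().backward()
    return X.grad

# Hypothetical usage with a toy regression network and random data.
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(3, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1)
)
X = torch.randn(200, 3)
S = input_sensitivities(model, X)

# Global summaries in the spirit of Sensitivity Analysis.
print("mean sensitivity per input:", S.mean(dim=0))
print("std of sensitivity per input:", S.std(dim=0))

# In the spirit of alpha-curves: the alpha-norm mean of |S| grows with
# alpha, moving from the average sensitivity toward the maximum.
for alpha in (1.0, 2.0, 4.0, 8.0):
    ms_alpha = (S.abs() ** alpha).mean(dim=0) ** (1.0 / alpha)
    print(f"alpha={alpha}:", ms_alpha)
```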


The utility and effectiveness of these methods were validated through three real-world use cases: predicting NOx emissions, Parkinson's disease progression, and turbofan engine Remaining Useful Life (RUL). These applications illustrated how the developed methods reveal nuanced insights into model behavior, surpassing commonly used XAI techniques by providing coherent and relevant information about the inner workings of the models.


Lay summary:

The growing use of machine learning models calls for a matching increase in our ability to interpret their decisions. This thesis develops three new Explainable Artificial Intelligence (XAI) methods, based on partial derivatives, for the interpretation of neural networks.


Descriptors: Artificial Intelligence, Data Analysis, Multivariate Analysis

Palabras clave: Neural Networks, Interpretability, Explainability, Explainable Artificial Intelligence, Partial Derivatives

Citation:
J. Pizarroso (2023), Explainable Artificial Intelligence (XAI) techniques based on partial derivatives with applications to neural networks. Doctoral thesis, Madrid (Spain).


Access to public repository